Detection of adversarial examples has been a hot topic over the last few years, given its importance for the safe deployment of machine learning algorithms in critical applications. However, detection methods are typically validated by assuming a single implicitly known attack strategy, which does not necessarily account for real-life threats. Indeed, this can lead to an overestimation of the detectors' performance and may introduce bias in the comparison between competing detection schemes. We propose a novel multi-armed framework, called MEAD, for evaluating detectors against several attack strategies in order to overcome this limitation. Among them, we make use of three new objectives to generate attacks. The proposed performance metric is based on the worst-case scenario: detection is successful only if all of the different attacks are correctly recognized. Empirically, we show the effectiveness of our approach. Moreover, the poor performance obtained by state-of-the-art detectors opens up a new and exciting line of research.
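The worst-case evaluation criterion described above can be sketched in a few lines (a minimal illustration, not the authors' implementation; the detector and attack values are toy stand-ins):

```python
def worst_case_detection(detector, attacked_inputs):
    """A sample counts as detected only if the detector flags
    EVERY adversarial version produced by the attack pool."""
    return all(detector(x) for x in attacked_inputs)

# Toy detector: flags inputs whose magnitude exceeds a threshold.
detector = lambda x: abs(x) > 0.5

attacks = [0.9, 0.7, 0.6]   # all attacks flagged -> detection succeeds
assert worst_case_detection(detector, attacks) is True

attacks = [0.9, 0.3, 0.6]   # one attack evades -> detection fails
assert worst_case_detection(detector, attacks) is False
```

The point of the worst-case aggregation is that a detector tuned to one attack strategy gets no credit for samples on which any other strategy slips through.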
Adversarial robustness has become a topic of growing interest in machine learning, since it has been observed that neural networks tend to be brittle. We propose an information-geometric formulation of adversarial defense and introduce FIRE, a new Fisher-Rao regularization for the categorical cross-entropy loss, which is based on the geodesic distance between the softmax outputs corresponding to natural and perturbed input features. Building on the information-geometric properties of the class of softmax distributions, we derive an explicit characterization of the Fisher-Rao distance (FRD) for the binary and multiclass cases, and draw some interesting properties as well as connections with standard regularization metrics. Furthermore, for a simple linear and Gaussian model, we show that all Pareto-optimal points in the accuracy-robustness region can be reached by FIRE, while other state-of-the-art methods fail. Empirically, we evaluate the performance of various classifiers trained with the proposed loss on standard datasets, showing a simultaneous improvement of up to 1% in both clean and robust performance while reducing the training time by 20% over the best-performing methods.
The explosion of data collection has raised serious privacy concerns among users, since shared data may also reveal sensitive information. The main goal of a privacy-preserving mechanism is to prevent a malicious third party from inferring sensitive information while keeping the shared data useful. In this paper, we study this problem in the context of time series data and smart meter (SM) power consumption measurements. Although the mutual information (MI) between private and released variables has been used as a common information-theoretic privacy measure, it fails to capture the causal time dependencies present in power consumption time series data. To overcome this limitation, we introduce directed information (DI) as a more meaningful privacy measure in the considered setting and propose a novel loss function. The optimization is then performed using an adversarial framework in which two recurrent neural networks (RNNs), referred to as the releaser and the adversary, are trained with opposite goals. Our empirical studies on SM measurements, conducted in the worst-case scenario where the attacker has access to the entire training data set used by the releaser, validate the proposed method and show the existing trade-off between privacy and utility.
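For reference, directed information is conventionally defined via causally conditioned mutual information (this is the standard definition; the paper's notation may differ):

```latex
I(X^n \to Y^n) \;=\; \sum_{t=1}^{n} I\!\left(X^t ; Y_t \mid Y^{t-1}\right)
```

compared with the mutual information $I(X^n ; Y^n) = \sum_{t=1}^{n} I(X^n ; Y_t \mid Y^{t-1})$. Replacing the full sequence $X^n$ with the causal prefix $X^t$ is what makes the measure sensitive to the temporal direction of the dependence.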
Smart meters (SMs) are able to share users' power consumption with utility providers almost in real time. These fine-grained signals carry sensitive information about users, which raises serious concerns from a privacy viewpoint. In this paper, we focus on real-time privacy threats, i.e., potential attackers who try to infer sensitive information from SM data in an online fashion. We adopt an information-theoretic privacy measure and show that it effectively limits the performance of any attacker. We then propose a general formulation for designing a privatization mechanism that can achieve a target level of privacy by adding a minimal amount of distortion to the SM measurements. In addition, to cope with different applications, a flexible distortion measure is considered. This formulation leads to a general loss function, which is optimized using a deep learning adversarial framework in which two neural networks, referred to as the releaser and the adversary, are trained with opposite goals. An exhaustive empirical study is then performed to validate the performance of the proposed approach and to compare it with state-of-the-art methods for the occupancy detection privacy problem. Finally, we also investigate the impact of data mismatch between the releaser and the attacker.
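The privacy-utility trade-off at the heart of this line of work can be illustrated with a toy numerical experiment: a stand-in "releaser" that simply adds Gaussian noise (rather than a trained network), an MSE distortion as the utility cost, and the correlation with the private variable as a crude proxy for attacker performance. All parameters here are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
occupancy = rng.integers(0, 2, size=10_000).astype(float)  # private signal

def release(signal, noise_std):
    """Stand-in 'releaser': perturb the measurement with Gaussian noise."""
    return signal + rng.normal(0.0, noise_std, size=signal.shape)

results = {}
for noise_std in (0.1, 2.0):
    released = release(occupancy, noise_std)
    distortion = float(np.mean((released - occupancy) ** 2))   # utility cost
    leakage = abs(np.corrcoef(occupancy, released)[0, 1])      # attacker proxy
    results[noise_std] = (distortion, leakage)

# More noise -> more distortion (less utility) but less leakage (more privacy).
assert results[2.0][0] > results[0.1][0]
assert results[2.0][1] < results[0.1][1]
```

The adversarial framework in the abstract replaces the fixed noise with a learned releaser and the correlation proxy with a trained adversary, so that the trade-off is optimized rather than merely observed.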
Numerous works use word embedding-based metrics to quantify societal biases and stereotypes in texts. Recent studies have found that word embeddings can capture semantic similarity but may be affected by word frequency. In this work we study the effect of frequency when measuring female vs. male gender bias with word embedding-based bias quantification methods. We find that Skip-gram with negative sampling and GloVe tend to detect male bias in high frequency words, while GloVe tends to return female bias in low frequency words. We show these behaviors still exist when words are randomly shuffled. This proves that the frequency-based effect observed in unshuffled corpora stems from properties of the metric rather than from word associations. The effect is spurious and problematic since bias metrics should depend exclusively on word co-occurrences and not individual word frequencies. Finally, we compare these results with the ones obtained with an alternative metric based on Pointwise Mutual Information. We find that this metric does not show a clear dependence on frequency, even though it is slightly skewed towards male bias across all frequencies.
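A PMI-based word association score of the kind compared in this abstract can be sketched from raw co-occurrence counts; positive values indicate association with the female context word and negative values with the male one. The counts below are toy numbers, not from the paper's corpora:

```python
import math

# Toy co-occurrence counts: cooc[word][context] over a corpus.
cooc = {
    "nurse":    {"she": 30, "he": 10},
    "engineer": {"she": 10, "he": 30},
}
context_totals = {"she": 40, "he": 40}
total = 80

def pmi(word, context):
    """Pointwise Mutual Information: log2 p(w, c) / (p(w) p(c))."""
    p_joint = cooc[word][context] / total
    p_word = sum(cooc[word].values()) / total
    p_ctx = context_totals[context] / total
    return math.log2(p_joint / (p_word * p_ctx))

def gender_bias(word):
    """Positive -> female association, negative -> male association."""
    return pmi(word, "she") - pmi(word, "he")

assert gender_bias("nurse") > 0
assert gender_bias("engineer") < 0
```

Because PMI normalizes by the marginal probabilities of both words, it depends only on how often words co-occur relative to chance, which is why it is less sensitive to raw word frequency than embedding-based scores.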
Reinforcement learning is a machine learning approach based on behavioral psychology. It is focused on learning agents that can acquire knowledge and learn to carry out new tasks by interacting with the environment. However, a problem arises when reinforcement learning is used in critical contexts, where the users of the system need more information about, and more confidence in, the actions executed by an agent. In this regard, explainable reinforcement learning seeks to equip an agent in training with methods for explaining its behavior in such a way that users with no experience in machine learning can understand it. One such method is memory-based explainable reinforcement learning, which uses an episodic memory to compute the probability of success for each state-action pair. In this work, we propose to apply the memory-based explainable reinforcement learning method in a hierarchical environment composed of sub-tasks that need to be addressed first in order to solve a more complex task. The end goal is to verify whether it is possible to give the agent the ability to explain its actions in the global task as well as in the sub-tasks. The results obtained show that it is possible to use the memory-based method in hierarchical environments with high-level tasks and to compute the probabilities of success that serve as a basis for explaining the agent's behavior.
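The core of the memory-based approach, i.e., deriving per-pair success probabilities from an episodic memory, can be sketched as follows (a minimal illustration with invented states and actions, not the paper's implementation):

```python
from collections import defaultdict

# Episodic memory: (state, action, episode_succeeded) records.
episodes = [
    ("s0", "left", True), ("s0", "left", True), ("s0", "left", False),
    ("s0", "right", False), ("s0", "right", True),
]

counts = defaultdict(lambda: [0, 0])   # [successes, visits] per pair
for state, action, success in episodes:
    counts[(state, action)][1] += 1
    if success:
        counts[(state, action)][0] += 1

def p_success(state, action):
    """Empirical probability that an episode succeeds after
    taking this action in this state."""
    successes, visits = counts[(state, action)]
    return successes / visits

assert abs(p_success("s0", "left") - 2 / 3) < 1e-9
assert abs(p_success("s0", "right") - 0.5) < 1e-9
```

An explanation can then be phrased directly from these numbers, e.g. "the agent chose 'left' in s0 because it succeeded from there in 67% of past episodes"; in a hierarchical setting, one such memory per sub-task yields explanations at both levels.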
In recent years, unmanned aerial vehicle (UAV) related technology has expanded knowledge in the area, bringing to light new problems and challenges that require solutions. Furthermore, because the technology allows processes usually carried out by people to be automated, it is in great demand in industrial sectors. The automation of these vehicles has been addressed in the literature, applying different machine learning strategies. Reinforcement learning (RL) is an automation framework that is frequently used to train autonomous agents. RL is a machine learning paradigm wherein an agent interacts with an environment to solve a given task. However, learning autonomously can be time-consuming, computationally expensive, and may not be practical in highly complex scenarios. Interactive reinforcement learning allows an external trainer to provide advice to an agent while it is learning a task. In this study, we set out to teach an RL agent to control a drone using reward-shaping and policy-shaping techniques simultaneously. Two simulated scenarios were proposed for the training: one without obstacles and one with obstacles. We also studied the influence of each technique. The results show that an agent trained simultaneously with both techniques obtains a lower reward than an agent trained using only a policy-based approach. Nevertheless, the agent achieves lower execution times and less dispersion during training.
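The two shaping mechanisms can be combined in a single Q-learning loop, as in this toy chain-world sketch: reward shaping adds a potential-based bonus to the environment reward, while policy shaping occasionally overrides action selection with a trainer's advice. Everything here (environment, potentials, advice rule, hyperparameters) is an illustrative assumption, not the paper's drone setup:

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = (-1, 1)                        # move left / right on a chain
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

def shaping(s, s_next):
    """Potential-based reward shaping with potential(s) = s,
    which rewards progress toward the goal without changing
    the optimal policy."""
    return gamma * s_next - s

def trainer_advice(s):                   # policy shaping: trainer says "right"
    return 1

for _ in range(200):
    s = 0
    while s != GOAL:
        if random.random() < 0.3:        # sometimes follow the trainer
            a = trainer_advice(s)
        elif random.random() < eps:      # otherwise epsilon-greedy
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = (1.0 if s_next == GOAL else 0.0) + shaping(s, s_next)
        target = r + gamma * max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s_next

# The learned policy should prefer moving right at every non-goal state.
assert all(Q[(s, 1)] > Q[(s, -1)] for s in range(GOAL))
```

The potential-based form of the shaping term matters: a naive per-step bonus can make dithering profitable, whereas the telescoping potential difference leaves the optimal policy intact while still speeding up learning.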
The widespread use of information and communication technology (ICT) over the course of the last decades has been a primary catalyst behind the digitalization of power systems. Meanwhile, as the utilization rate of the Internet of Things (IoT) continues to rise along with recent advancements in ICT, the need for secure and computationally efficient monitoring of critical infrastructures like the electrical grid and the agents that participate in it is growing. A cyber-physical system, such as the electrical grid, may experience anomalies for a number of different reasons. These may include physical defects, mistakes in measurement and communication, cyberattacks, and other similar occurrences. The goal of this study is to highlight the most common incidents affecting power systems and to give an overview and classification of the most common approaches to detecting them, starting at the consumer/prosumer end and working up to the primary power producers. In addition, this article discusses the methods and techniques, such as artificial intelligence (AI), that are used to identify anomalies in power systems and markets.
Vision Transformers (ViTs) have become a dominant paradigm for visual representation learning with self-attention operators. Although these operators provide flexibility to the model with their adjustable attention kernels, they suffer from inherent limitations: (1) the attention kernel is not discriminative enough, resulting in high redundancy of the ViT layers, and (2) the complexity in computation and memory is quadratic in the sequence length. In this paper, we propose a novel attention operator, called lightweight structure-aware attention (LiSA), which has a better representation power with log-linear complexity. Our operator learns structural patterns by using a set of relative position embeddings (RPEs). To achieve log-linear complexity, the RPEs are approximated with fast Fourier transforms. Our experiments and ablation studies demonstrate that ViTs based on the proposed operator outperform self-attention and other existing operators, achieving state-of-the-art results on ImageNet, and competitive results on other visual understanding benchmarks such as COCO and Something-Something-V2. The source code of our approach will be released online.
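The log-linear complexity claim rests on the standard fact that applying a relative-position kernel across a sequence is a circular convolution, which the convolution theorem lets us compute with FFTs in O(n log n) instead of O(n²). A minimal single-channel numpy sketch of that standard identity (illustrative only, not the LiSA code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.standard_normal(n)          # token features (one channel)
rpe = rng.standard_normal(n)        # relative-position kernel

# Direct circular convolution: O(n^2) pairwise terms.
direct = np.array([sum(x[j] * rpe[(i - j) % n] for j in range(n))
                   for i in range(n)])

# FFT-based circular convolution: O(n log n) via the convolution theorem.
fast = np.fft.ifft(np.fft.fft(x) * np.fft.fft(rpe)).real

assert np.allclose(direct, fast)
```

In an attention operator, the learned RPE plays the role of `rpe` here, so mixing information across all positions no longer requires materializing a quadratic attention matrix.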
Explainable artificial intelligence is proposed to provide explanations for the reasoning performed by artificial intelligence systems. There is no consensus on how to evaluate the quality of these explanations, since even the definition of explanation itself is not clear in the literature. In particular, for the widely known Local Linear Explanations, there are qualitative proposals for the evaluation of explanations, although they suffer from theoretical inconsistencies. The case of images is even more problematic, where a visual explanation may appear to explain a decision when all it really does is detect edges. There is a large number of metrics in the literature specialized in quantitatively measuring different qualitative aspects, so we should be able to develop metrics capable of measuring the desirable aspects of explanations in a robust and correct way. In this paper, we propose a procedure called REVEL to evaluate different aspects of the quality of explanations with a theoretically coherent development. This procedure advances the state of the art in several ways: it standardizes the concept of explanation and develops a series of metrics that not only allow comparisons between explanations but also provide absolute information about the explanation itself. The experiments have been carried out on four image datasets as benchmarks, on which we show REVEL's descriptive and analytical power.
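A Local Linear Explanation of the kind this abstract evaluates can be sketched as fitting a linear surrogate to a black-box model around one instance, with the surrogate's coefficients serving as feature attributions. The black-box function and sampling scheme below are illustrative assumptions, not REVEL's own procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: only its predictions are observable.
def black_box(X):
    return 3.0 * X[:, 0] - 2.0 * X[:, 1] + np.sin(X[:, 2])

x0 = np.array([1.0, 1.0, 0.0])               # instance to explain

# Sample perturbations around x0 and fit a local linear surrogate.
X = x0 + 0.1 * rng.standard_normal((500, 3))
y = black_box(X)
A = np.column_stack([X - x0, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The first three coefficients are the feature attributions at x0.
attributions = coef[:3]
assert abs(attributions[0] - 3.0) < 0.1      # strong positive influence
assert abs(attributions[1] + 2.0) < 0.1      # strong negative influence
```

Evaluation procedures such as the one proposed here then ask, quantitatively, how faithfully such attributions reflect the black box locally and how stable they are under resampling.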